
    Bayesian econometrics: conjugate analysis and rejection sampling using Mathematica

    Mathematica is a powerful "system for doing mathematics by computer" which runs on personal computers (Macs and MS-DOS machines), workstations and mainframes. Here we show how Bayesian methods can be implemented in Mathematica. One of the drawbacks of Bayesian techniques is that they are computation-intensive, and every computation is a little different. Since Mathematica is so flexible, it can easily be adapted to solving a number of different Bayesian estimation problems. We illustrate the use of Mathematica functions (i) in a traditional conjugate analysis of the linear regression model and (ii) in a completely nonstandard model, where rejection sampling is used to sample from the posterior.
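    The rejection-sampling step described in this abstract can be sketched in a few lines (here in Python rather than Mathematica; the Beta(3, 2)-shaped kernel is a hypothetical stand-in for a nonstandard unnormalised posterior):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unnormalised "posterior" kernel for a parameter theta on [0, 1]
# (proportional to a Beta(3, 2) density, standing in for a nonstandard target)
def kernel(theta):
    return theta**2 * (1 - theta)

# Rejection sampling with a Uniform(0, 1) proposal.
# M must bound kernel/proposal; the kernel's maximum on [0, 1] is at theta = 2/3.
M = kernel(2 / 3)

def rejection_sample(n):
    out = []
    while len(out) < n:
        theta = rng.uniform()                  # draw from the proposal
        if rng.uniform() * M <= kernel(theta):  # accept with prob kernel/(M*proposal)
            out.append(theta)
    return np.array(out)

draws = rejection_sample(10000)
print(draws.mean())   # the Beta(3, 2) mean is 3/5 analytically
```

    Accepted draws are exact samples from the normalised target, which is what makes the method attractive when the posterior has no standard form but a usable bound M is available.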

    Robust Bayesian inference in empirical regression models

    Broadening the stochastic assumptions on the error terms of regression models was prompted by the analysis of linear multivariate t models in Zellner (1976). We consider a possibly non-linear regression model under any multivariate elliptical data density, and examine Bayesian posterior and predictive results. The latter are shown to be robust with respect to the specific choice of a sampling density within this elliptical class. In particular, a sufficient condition for such model robustness is that we single out a precision factor τ² on which we specify an improper prior density. Apart from the posterior distribution of this nuisance parameter τ², the entire analysis is then completely unaffected by departures from Normality. Similar results hold in finite mixtures of such elliptical densities, which can be used to average out specification uncertainty.

    A decision theoretic analysis of the unit root hypothesis using mixtures of elliptical models

    This paper develops a formal decision theoretic approach to testing for a unit root in economic time series. The approach is empirically implemented by specifying a loss function based on predictive variances; models are chosen so as to minimize expected loss. In addition, the paper broadens the class of likelihood functions traditionally considered in the Bayesian unit root literature by: i) allowing for departures from normality via the specification of a likelihood based on general elliptical densities; ii) allowing for structural breaks to occur; iii) allowing for moving average errors; and iv) using mixtures of various submodels to create a very flexible overall likelihood. Empirical results indicate that, while the posterior probability of trend-stationarity is quite high for most of the series considered, the unit root model is often selected in the decision theoretic analysis.

    Robust Bayesian inference in lq-spherical models

    The class of multivariate lq-spherical distributions is introduced and defined through their isodensity surfaces. We prove that, under a Jeffreys'-type improper prior on the scale parameter, posterior inference on the location parameters is the same for all lq-spherical sampling models with common q. This gives us perfect inference robustness with respect to any departures from the reference case of independent sampling from the exponential power distribution.

    Rejection sampling in demand systems

    We illustrate the method of rejection sampling in a Bayesian application of a new approach to estimating demand systems. This approach, suggested by Varian (1990), is based on a generalization of Afriat's (1967) efficiency index. Rejection sampling is applied to the prior-to-posterior mapping, enabling us to obtain posterior results in a nonstandard model.

    Bayesian Stochastic Frontier Analysis Using WinBUGS

    Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, a freely available software. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and that model changes are easily implemented.
    Keywords: Efficiency, Markov chain Monte Carlo, Model comparison, Regularity, Software
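    The WinBUGS analyses described above rely on general-purpose MCMC; the same idea can be sketched by hand. The Python sketch below (an illustrative stand-in, not the paper's code) fits a normal/exponential cross-sectional frontier y = β₀ + β₁x − u + v by random-walk Metropolis, using the closed-form marginal likelihood of the composed error ε = v − u (v ~ N(0, σ²), u ~ Exp with mean λ) and simulated data with assumed true values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate a normal/exponential stochastic frontier: y = b0 + b1*x - u + v
n = 300
x = rng.uniform(0, 1, n)
sigma_true, lam_true = 0.2, 0.3                   # assumed true scale parameters
v = rng.normal(0, sigma_true, n)                  # symmetric noise
u = rng.exponential(lam_true, n)                  # one-sided inefficiency
y = 1.0 + 0.8 * x - u + v

def loglik(theta):
    """Marginal log-likelihood of eps = v - u:
    f(eps) = (1/lam) * exp(eps/lam + s^2/(2 lam^2)) * Phi(-eps/s - s/lam)."""
    b0, b1, log_s, log_l = theta
    s, l = np.exp(log_s), np.exp(log_l)
    eps = y - b0 - b1 * x
    return np.sum(-np.log(l) + eps / l + s**2 / (2 * l**2)
                  + norm.logcdf(-eps / s - s / l))

# Random-walk Metropolis with flat priors on (b0, b1, log sigma, log lambda)
theta = np.array([0.0, 0.0, np.log(0.5), np.log(0.5)])
cur = loglik(theta)
draws = []
for it in range(15000):
    prop = theta + 0.05 * rng.normal(size=4)
    new = loglik(prop)
    if np.log(rng.uniform()) < new - cur:          # accept/reject step
        theta, cur = prop, new
    draws.append(theta.copy())
post = np.array(draws[5000:])                      # discard burn-in
print(post.mean(axis=0))                           # posterior means of the 4 parameters
```

    A Gibbs sampler with the inefficiencies u treated as latent variables (as BUGS effectively does) would mix better; the point here is only that the posterior of a frontier model is a routine MCMC target.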

    On choosing mixture components via non-local priors

    Choosing the number of mixture components remains an elusive challenge. Model selection criteria can be either overly liberal or conservative and return poorly-separated components of limited practical use. We formalize non-local priors (NLPs) for mixtures and show how they lead to well-separated components with non-negligible weight, interpretable as distinct subpopulations. We also propose an estimator for posterior model probabilities under local and non-local priors, showing that Bayes factors are ratios of posterior to prior empty-cluster probabilities. The estimator is widely applicable and helps set thresholds to drop unoccupied components in overfitted mixtures. We suggest default prior parameters based on multi-modality for Normal/T mixtures and minimal informativeness for categorical outcomes. We characterise the NLP-induced sparsity theoretically and derive tractable expressions and algorithms. We fully develop Normal, Binomial and product Binomial mixtures, but the theory, computation and principles hold more generally. We observed a serious lack of sensitivity of the Bayesian information criterion (BIC), insufficient parsimony of the AIC and a local prior, and a mixed behavior of the singular BIC. We also considered overfitted mixtures; their performance was competitive but depended on tuning parameters. Under our default prior elicitation, NLPs offered a good compromise between sparsity and power to detect meaningfully-separated components.

    Revised stochastic analysis of an input-output model

    A main difficulty of regional analysis is the inaccuracy of regional input-output data. A natural framework for investigation is stochastic input-output analysis. In his study of Central Queensland, West (1986) assumes that input coefficients are normally distributed and derives formulas for approximating the means and variances of input-output multipliers. In his normality framework, however, these moments do not exist. Moreover, we expose an inconsistency in the derivation. We remedy these shortcomings by respecifying the stochastic structure and by evaluating the moments directly through Monte Carlo calculations. West's formulas are quite accurate for an aggregated version of his data set; the leading terms of the formulas can be shown to be first-order approximations to the means and variances.
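    The direct Monte Carlo evaluation of multiplier moments can be mimicked on a toy example. The sketch below uses a hypothetical 3-sector coefficient matrix and clips the normal perturbations at zero so that A stays non-negative and the Leontief inverse exists for every draw — a crude illustration of why the stochastic structure must be respecified before the moments are well defined:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-sector technical-coefficient matrix (point estimates)
A_mean = np.array([[0.10, 0.20, 0.05],
                   [0.15, 0.10, 0.25],
                   [0.05, 0.30, 0.10]])
cv = 0.10                                  # assumed coefficient of variation

def multipliers(A):
    # Output multipliers: column sums of the Leontief inverse (I - A)^-1
    return np.linalg.inv(np.eye(3) - A).sum(axis=0)

# Monte Carlo: perturb each coefficient, clipping at zero to keep A feasible
draws = []
for _ in range(20000):
    A = np.clip(A_mean * (1 + cv * rng.normal(size=(3, 3))), 0, None)
    draws.append(multipliers(A))
draws = np.array(draws)
print("means:    ", draws.mean(axis=0))
print("std devs: ", draws.std(axis=0))
```

    Under unrestricted normality a draw of A can make I − A singular with positive probability, which is exactly why the means and variances fail to exist in West's original framework; any respecification that keeps the spectral radius of A below one restores them.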

    Alternative efficiency measures for multiple-output production

    This paper has two main purposes. Firstly, we develop various ways of defining efficiency in the case of multiple-output production. Our framework extends a previous model by allowing for nonseparability of inputs and outputs. We also specifically consider the case where some of the outputs are undesirable, such as pollutants. We investigate how these efficiency definitions relate to one another and to other approaches proposed in the literature. Secondly, we examine the behavior of these definitions in two examples of practically relevant size and complexity. One of these involves banking and the other agricultural data. Our main findings can be summarized as follows. For a given efficiency definition, efficiency rankings are found to be informative, despite the considerable uncertainty in the inference on efficiencies. It is, however, important for the researcher to select an efficiency concept appropriate to the particular issue under study, since different efficiency definitions can lead to quite different conclusions.